3-D Quantitative Vascular Shape Analysis for Arterial Bifurcations via Dynamic Tube Fitting
Reliable and reproducible estimation of vessel centerlines and reference surfaces is an important step for the assessment of luminal lesions. Conventional methods are commonly developed for quantitative analysis of "straight" vessel segments and have limitations in defining the precise location of the centerline and the reference lumen surface for both the main vessel and the side branches in the vicinity of bifurcations. To address this, we propose the estimation of the centerline and the reference surface through the registration of an elliptical cross-sectional tube to the desired constituent vessel in each major bifurcation of the arterial tree. The proposed method works directly on the mesh domain, thus alleviating the need for image upsampling, usually required in conventional volume domain approaches. We demonstrate the efficiency and accuracy of the method on both synthetic images and coronary CT angiograms. Experimental results show that the new method is capable of estimating vessel centerlines and reference surfaces with a high degree of agreement to those obtained through manual delineation. The centerline errors are reduced by an average of 62.3% in the regions of the bifurcations, when compared to the results of the initial solution obtained through the use of the mesh contraction method.
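To make the tube-fitting idea concrete, below is a minimal, self-contained sketch of one ingredient: least-squares fitting of an ellipse to points sampled from a lumen cross-section. The algebraic-conic formulation and the function name are illustrative assumptions, not the paper's actual registration procedure.

```python
# Illustrative only: fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
# to 2D cross-section points by linear least squares. The paper registers a
# full elliptical tube on the mesh; this shows a single cross-sectional slice.
import numpy as np

def fit_ellipse(points):
    """Return conic coefficients (a, b, c, d, e) for Nx2 points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(points)), rcond=None)
    return coeffs

# Synthetic test: noisy samples of an ellipse centred at (1, 2), radii 3 and 1.
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([1 + 3 * np.cos(t), 2 + np.sin(t)])
pts += 0.01 * np.random.randn(*pts.shape)
print(fit_ellipse(pts))
```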
Automatic Segmentation of Coronary Arteries in CT Imaging in the Presence of Kissing Vessel Artifacts
In this paper, we present a novel two-step algorithm for the segmentation of coronary arteries in computed tomography images based on the framework of active contours. In the proposed method, both global and local intensity information is utilized in the energy calculation. The global term is defined as a normalized cumulative distribution function, which contributes to the overall active contour energy in an adaptive fashion based on image histograms, to deform the active contour away from local stationary points. Possible outliers, such as kissing vessel artifacts, are removed in the postprocessing stage by a slice-by-slice correction scheme based on multiregion competition, where both arteries and kissing vessels are identified and tracked through the slices. The efficiency and accuracy of the proposed technique are demonstrated on both synthetic and real datasets. The results on clinical datasets show that the method is able to extract the major branches of the arteries with an average distance of 0.73 voxels from the manually delineated ground truth data. In the presence of kissing vessel artifacts, the outer surface of the entire coronary tree, extracted by the proposed algorithm, is smooth and contains fewer erroneous regions originating from kissing vessel artifacts, as compared to the initial segmentation.
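As a rough illustration of the histogram-driven global term, the following sketch computes a per-pixel weight from the image's normalized cumulative distribution function. The weighting scheme is an assumption for demonstration; the paper's full energy couples this global term with local intensity terms inside an active contour.

```python
# Sketch: per-pixel weight derived from the normalized histogram CDF, the
# kind of adaptive global quantity the abstract describes. Illustrative only.
import numpy as np

def global_weight(image, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                  # normalized CDF in [0, 1]
    idx = np.clip(np.digitize(image, edges[:-1]) - 1, 0, bins - 1)
    return cdf[idx]                                 # weight per pixel

img = np.random.rand(64, 64)
w = global_weight(img)
print(w.min(), w.max())                             # spans roughly [0, 1]
```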
Stimulation and measurement patterns versus prior information for fast 3D EIT: A breast screening case study
Imposing prior information is a typical strategy in inverse problems in return for a stable numerical algorithm. For a given imaging system configuration, Picard's stability condition could be deployed as a practical measure of the performance of the system against various priors and noise-contaminated measurements. Herein, we make extensive use of this measure to quantify the performance of impedance imaging systems for various injection patterns. In effect, we numerically demonstrate that varying electrode distributions and numbers yields little improvement, if any, in the performance of the impedance imaging system. In contrast, by using groups of electrodes in the 3D current injection process, a step increase in performance is obtained. Numerical results on a female breast phantom reveal that the performance measure of the imaging system is 15% for a conventional combination of stimulation and prior information, 61% for groups of electrodes and the same prior, and 97% for groups of electrodes and a more accurate prior. Finally, since a smaller number of electrodes is involved in the measurement process, fewer measurements are acquired. However, no compromise in the quality of the reconstructed images is observed.
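A hedged sketch of how a Picard-style performance measure can be computed for a linearized imaging problem: take the SVD of a sensitivity (Jacobian) matrix and check how the data coefficients |u_i^T b| decay relative to the singular values. The random matrix, noise level and threshold below are stand-ins, not the paper's EIT forward model.

```python
# Discrete Picard check on a toy linear problem J x = b. The Jacobian here
# is a random stand-in; an EIT study would use the actual sensitivity matrix.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((200, 100))                  # stand-in Jacobian
b = J @ rng.standard_normal(100) + 0.01 * rng.standard_normal(200)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
coeffs = np.abs(U.T @ b)                             # |u_i^T b|
# Fraction of spectral components whose Picard ratio stays bounded, i.e. the
# coefficients decay at least as fast as the singular values (threshold assumed).
usable = np.mean(coeffs / s < 10.0)
print(f"usable spectral components: {100 * usable:.0f}%")
```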
Development and testing of a prototype instrumented bicycle model for the prevention of cyclist accidents
Cycling is an increasingly popular mode of travel in cities owing to the great advantages that it offers in terms of space consumption, health and environmental sustainability, and is therefore favoured and promoted by many city authorities worldwide. However, cycling is also perceived as relatively unsafe, and therefore it has yet to be adopted as a viable alternative to the private car. Rising accident numbers, unfortunately, confirm this perception as reality, with a particular source of hazard (and a significant proportion of collisions) appearing to originate from the interaction of cyclists with Heavy Vehicles (HVs). This paper introduces Cyclist 360° Alert, a novel technological solution aimed at tackling this problem, ultimately improving the safety of cyclists and promoting cycling to non-riders. Following a thorough review of the trends in cyclist collisions, which sets out the motivation for the research, the paper goes on to present the Cyclist 360° Alert system architecture design, and examines possible technologies and techniques that can be employed in the accurate positioning of cyclists and vehicles. It then focuses in particular on the aspect of bicycle tracking, and proposes a localisation approach based on micro-electromechanical systems (MEMS) sensor configurations. Initial experimental results from a set of controlled experiments using a purpose-developed prototype bicycle simulator model are reported, and conclusions on the applicability of specific sensor configurations are drawn, both in terms of sensor accuracy and reliability in taking sample measurements of motion.
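For flavour, a toy dead-reckoning sketch of the kind of MEMS-based tracking discussed above: integrating a gyroscope yaw rate to heading and a speed signal to a 2D position. The sample rate, noise levels and signal sources are assumptions for demonstration only, not the paper's sensor configuration.

```python
# Toy 2D dead reckoning from a yaw-rate gyro and a speed signal (assumed
# sources). Real bicycle tracking must also handle bias drift and lean angle.
import numpy as np

dt = 0.01                                            # 100 Hz sampling (assumed)
n = 1000
yaw_rate = 0.1 + 0.01 * np.random.randn(n)           # rad/s from a MEMS gyro
speed = np.full(n, 5.0)                              # m/s, e.g. a wheel sensor

heading = np.cumsum(yaw_rate) * dt                   # integrate angular rate
x = np.cumsum(speed * np.cos(heading)) * dt          # integrate velocity
y = np.cumsum(speed * np.sin(heading)) * dt
print(f"end position: ({x[-1]:.1f} m, {y[-1]:.1f} m)")
```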
Analyzing Learners Behavior in MOOCs: An Examination of Performance and Motivation Using a Data-Driven Approach
Massive Open Online Courses (MOOCs) have been experiencing increasing use and popularity in highly ranked universities in recent years. The opportunity of accessing high quality courseware content within such platforms, while eliminating the burden of educational, financial and geographical obstacles, has led to a rapid growth in participant numbers. The increasing number and diversity of participating learners have opened up new horizons for the research community in the investigation of effective learning environments. Learning Analytics has been used to investigate the impact of engagement on student performance. However, an extensive literature review indicates that there is little research on the impact of MOOCs, particularly in analyzing the link between behavioral engagement and motivation as predictors of learning outcomes. In this study, we consider a dataset which originates from online courses provided by Harvard University and the Massachusetts Institute of Technology, delivered through the edX platform [1]. Two sets of empirical experiments are conducted using both statistical and machine learning techniques. Statistical methods are used to examine the association between engagement level and performance, including the consideration of learner educational backgrounds. The results indicate a significant gap between the successful and failing learner groups, with successful learners found to read and watch course material to a higher degree. Machine learning algorithms are used to automatically detect learners who are lacking in motivation at an early stage in the course, thus providing instructors with insight into student withdrawal.
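A minimal sketch of the kind of data-driven detection described above: a classifier over simple engagement features. The synthetic features (chapters read, videos watched, forum posts) and the choice of a Random Forest are illustrative assumptions; they do not reproduce the HarvardX/MITx dataset or the paper's exact models.

```python
# Toy early-withdrawal classifier on synthetic engagement counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
# columns: chapters read, videos watched, forum posts (all synthetic)
X = rng.poisson(lam=[8, 5, 1], size=(n, 3)).astype(float)
# synthetic "completed course" label driven by engagement plus noise
y = (X[:, 0] + X[:, 1] + rng.normal(0, 2, n) > 12).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```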
Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images
We present a new method for the fully automatic 3D reconstruction of the coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method, based on the information of the Hessian matrix, is introduced for the enhancement of the vascular structure. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
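The Hessian-based enhancement stage can be illustrated with a single-scale, Frangi-style vesselness filter, sketched below. The parameters beta and c, and the restriction to a single scale and a 2D image, are simplifying assumptions rather than the paper's exact multiscale formulation.

```python
# Single-scale 2D vesselness from Hessian eigenvalues (Frangi-style sketch).
# Parameters are tuned for a toy image with intensities in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(image, sigma=2.0, beta=0.5, c=0.05):
    # second-order Gaussian derivatives (Hessian entries; x = columns)
    Hxx = gaussian_filter(image, sigma, order=(0, 2))
    Hyy = gaussian_filter(image, sigma, order=(2, 0))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # eigenvalues of the 2x2 symmetric Hessian, ordered |l1| <= |l2|
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)           # blob-vs-line measure
    S = np.sqrt(l1 ** 2 + l2 ** 2)                   # second-order structure
    v = np.exp(-Rb**2 / (2 * beta**2)) * (1 - np.exp(-S**2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)                  # bright vessels on dark bg

img = np.zeros((64, 64)); img[30:34, :] = 1.0        # synthetic bright line
print(vesselness2d(img).max())                       # high response on the line
```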
Automating the processing of cDNA microarray images
This work is concerned with the development of an automatic image processing tool for DNA microarray images. This paper proposes, implements and tests a new tool for cDNA image analysis. The DNAs are imaged as thousands of circularly shaped objects (spots) on the microarray image, and the purpose of this tool is to correctly address their locations, segment the pixels belonging to spots and extract the quality features of each spot. The techniques used for the addressing, segmentation and feature extraction of spots are described in detail. The results obtained with the proposed tool are systematically compared with those of conventional cDNA microarray analysis software tools.
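A toy sketch of the addressing and segmentation steps: locate grid cells from intensity projections, then threshold each cell to obtain a spot mask. Real microarray gridding must additionally handle rotation and uneven spacing; the helper names below are hypothetical.

```python
# Crude gridding and per-cell spot segmentation, for illustration only.
import numpy as np

def grid_positions(profile, n_spots):
    """Split a 1D intensity projection into n_spots equal cells and return
    the cell boundaries (a crude stand-in for peak-based addressing)."""
    return np.linspace(0, len(profile), n_spots + 1).astype(int)

def segment_spot(cell, k=1.0):
    """Foreground mask: pixels brighter than mean + k*std of the cell."""
    return cell > cell.mean() + k * cell.std()

img = np.random.rand(120, 120)                       # stand-in array image
rows = grid_positions(img.sum(axis=1), 4)
cols = grid_positions(img.sum(axis=0), 4)
cell = img[rows[0]:rows[1], cols[0]:cols[1]]
print("spot pixels:", int(segment_spot(cell).sum()))
```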
A Hybrid Energy Model for Region Based Curve Evolution - Application to CTA Coronary Segmentation
Background and Objective: State-of-the-art medical imaging techniques have enabled non-invasive imaging of the internal organs. However, high volumes of imaging data make manual interpretation and delineation of abnormalities cumbersome for clinicians. These challenges have driven intensive research into efficient medical image segmentation. In this work, we propose a hybrid region-based energy formulation for effective segmentation in computed tomography angiography (CTA) imagery.
Methods: The proposed hybrid energy couples an intensity-based local term with an efficient discontinuity-based global model of the image for optimal segmentation. The segmentation is achieved using a level set formulation due to its computational robustness. After validating the statistical significance of the hybrid energy, we applied the proposed model to solve an important clinical problem of 3D coronary segmentation. An improved seed detection method is used to initialize the level set evolution. Moreover, we employed an auto-correction feature that captures the emerging peripheries during the curve evolution to ensure completeness of the coronary tree.
Results: We evaluated the segmentation accuracy of the proposed energy model against existing techniques in two stages. Qualitative and quantitative results demonstrate the effectiveness of the proposed framework, with consistent mean sensitivity and specificity measures of 80% across the CTA data. Moreover, a high degree of agreement with respect to the inter-observer differences justifies the generalization of the proposed method.
Conclusions: The proposed method is effective in segmenting the coronary tree from the CTA volume based on the hybrid image-based energy, which can improve clinicians' ability to detect arterial abnormalities.
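To illustrate the general mechanics of region-based curve evolution in a level set formulation, the sketch below runs a Chan-Vese-style update on a synthetic image. It implements only a plain region-mean data term, not the paper's hybrid local/global energy.

```python
# Minimal region-based level-set evolution (Chan-Vese-style data term only).
import numpy as np

def level_set_step(phi, img, dt=0.5, lam=1.0):
    inside, outside = phi > 0, phi <= 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[outside].mean() if outside.any() else 0.0
    # pixels closer to the inside mean c1 grow phi; others shrink it
    force = lam * ((img - c2) ** 2 - (img - c1) ** 2)
    return phi + dt * force

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0    # bright synthetic region
yy, xx = np.mgrid[0:64, 0:64]
phi = 15.0 - np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2)   # circular init
for _ in range(50):
    phi = level_set_step(phi, img)
print("segmented pixels:", int((phi > 0).sum()))     # ~576 (the 24x24 square)
```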
A new machine learning based approach to predict Freezing of Gait
Freezing of gait (FoG) is a motor symptom of Parkinson's disease (PD) that frequently occurs in long-term sufferers of the disease. FoG can lead to falls and may result in nursing home admission, and therefore impacts negatively on quality of life. The focus of this study is the systematic evaluation of machine learning techniques, in conjunction with varying-size time windows and time/frequency domain feature sets, in predicting a FoG event before its onset. In the experiments, the Daphnet FoG dataset is used to benchmark performance. This consists of accelerometer signals obtained from sensors mounted on the ankle, thigh and trunk of PD patients. The dataset is annotated with instances of normal activity events and FoG events. To predict the onset of FoG, the dataset is augmented with an additional class, termed "transition", which relates to a manually defined period prior to the occurrence of a FoG episode. In this research, five machine learning models are used, namely, Random Forest, Extreme Gradient Boosting, Gradient Boosting, Support Vector Machines using Radial Basis Functions, and Neural Networks. Support Vector Machines with Radial Basis Function kernels provided the best performance, achieving sensitivity values of 72.34%, 91.49% and 75.00%, and specificity values of 87.36%, 88.51% and 93.62%, for the FoG, transition and normal activity classes, respectively.
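A compact sketch of the windowing-and-classification pipeline outlined above: slice a 1D accelerometer signal into fixed windows, extract simple time/frequency features, and fit an RBF-kernel SVM. The window length, the features and the binary synthetic signals are assumptions for demonstration, not the three-class Daphnet setup.

```python
# Windowed features + RBF SVM on synthetic "walking" vs "freezing" signals.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def window_features(signal, win=128):
    """Mean, std and dominant-frequency bin per non-overlapping window."""
    n = len(signal) // win
    feats = []
    for w in signal[: n * win].reshape(n, win):
        spec = np.abs(np.fft.rfft(w))
        feats.append([w.mean(), w.std(), spec[1:].argmax() + 1])
    return np.array(feats)

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, 128 * 50)                          # toy "walking"
fog = rng.normal(0, 1, 128 * 50) + np.sin(np.arange(128 * 50))  # toy "freezing"
X = np.vstack([window_features(normal), window_features(fog)])
y = np.array([0] * 50 + [1] * 50)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("train accuracy:", clf.score(X, y))
```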
An Efficient Queries Processing Model Based on Multi Broadcast Searchable Keywords Encryption (MBSKE)
Cloud computing is a technology which has enabled many organizations to outsource their data in an encrypted form to improve processing times. The public Internet was not initially designed to handle massive quantities of data flowing through millions of networks, so the rapid increase in broadcast users and the growth in the amount of broadcast information lead to slow sending of queries to, and receiving of encrypted data from, the cloud. To address this problem, the Next Generation Internet (NGI) has been developed to offer high speed while preserving data privacy. This research proposes a novel search algorithm called Multi-broadcast Searchable Keywords Encryption, which processes queries comprising a set of keywords. This set of keywords is sent from the users to the cloud server in an encrypted form, thus hiding all information about the user and the content of the queries from the cloud server. The proposed method uses a caching algorithm and provides an improvement of 40% in terms of runtime and trapdoor generation. In addition, the method minimizes computational costs and complexity, and maximizes throughput in the cloud environment, whilst maintaining the privacy and confidentiality of both the user and the cloud. The cloud returns encrypted query results to the user, where data is decrypted using the users' private keys.
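As a didactic illustration of the trapdoor idea underlying searchable keyword encryption (not the MBSKE scheme itself), the sketch below derives keyed hashes of keywords on the client, so the server can match queries against an encrypted index without learning the keywords.

```python
# Toy searchable-encryption trapdoor: the server only ever sees opaque HMAC
# values, never the plaintext keywords. A real scheme adds randomization,
# access control and secure index construction.
import hmac, hashlib

KEY = b"client-secret-key"              # shared only among authorized users

def trapdoor(keyword: str) -> bytes:
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# Server-side index: trapdoor -> document ids (built by the data owner).
index = {trapdoor("cardiology"): [1, 7], trapdoor("mooc"): [3]}

def search(td: bytes):
    """Server-side lookup over opaque trapdoors."""
    return index.get(td, [])

print(search(trapdoor("cardiology")))   # -> [1, 7]
```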